Reduce WAL activity by freezing tuples immediately #5890

Merged 1 commit into timescale:main on Oct 25, 2023

Conversation

@jnidzwetzki (Contributor) commented on Jul 21, 2023

When we compress a chunk, we create a new compressed chunk for storing the compressed data. So far, the tuples were just inserted into the compressed chunk and frozen by a later vacuum run.

However, freezing the tuples causes WAL activity, which can be avoided because the compressed chunk is created in the same transaction as the tuples. This patch reduces the WAL activity by storing these tuples directly as frozen and prevents a freeze operation in the future. This approach is similar to PostgreSQL's COPY FREEZE.
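For reference, PostgreSQL accepts COPY ... FREEZE only when the target table was created or truncated in the current transaction, which is the same precondition this patch relies on for the compressed chunk. A minimal psql sketch of that behavior (the table name is only for illustration):

-- FREEZE is rejected unless the table was created or truncated
-- in the current transaction.
BEGIN;
CREATE TABLE copy_freeze_demo(time timestamptz NOT NULL, value double precision);
COPY copy_freeze_demo FROM STDIN WITH (FORMAT csv, FREEZE);
2023-07-21 00:00:00+00,1.5
\.
COMMIT;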


Changed test output

Since the tuples are frozen during insert, a visibility map is created (on PostgreSQL >= 14). Therefore, the size of the compressed chunk grows by at least one page, since the visibility map needs some space on disk:

## Without freezing
select 
  pg_relation_size(:table_oid, 'main') as main,
  pg_relation_size(:table_oid, 'fsm') as fsm,
  pg_relation_size(:table_oid, 'vm') as vm,
  pg_relation_size(:table_oid, 'init') as init,
  pg_table_size(:table_oid), 
  pg_indexes_size(:table_oid) as indexes,
  pg_total_relation_size(:table_oid) as total;

 main | fsm | vm | init | pg_table_size | indexes | total 
------+-----+----+------+---------------+---------+-------
 8192 |   0 |  0 |    0 |         16384 |       0 | 16384
(1 row)

## With freezing
select 
  pg_relation_size(:table_oid, 'main') as main,
  pg_relation_size(:table_oid, 'fsm') as fsm,
  pg_relation_size(:table_oid, 'vm') as vm,
  pg_relation_size(:table_oid, 'init') as init,
  pg_table_size(:table_oid), 
  pg_indexes_size(:table_oid) as indexes,
  pg_total_relation_size(:table_oid) as total;

 main | fsm |  vm  | init | pg_table_size | indexes | total 
------+-----+------+------+---------------+---------+-------
 8192 |   0 | 8192 |    0 |         24576 |       0 | 24576
(1 row)
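As a cross-check (a sketch, assuming the pg_visibility extension is available), the content of the new visibility map page can be inspected directly; the chunk name matches the one used in the Testing section below:

CREATE EXTENSION IF NOT EXISTS pg_visibility;

-- Every heap page of the frozen compressed chunk should report
-- all_visible = t and all_frozen = t.
SELECT blkno, all_visible, all_frozen
  FROM pg_visibility_map('_timescaledb_internal.compress_hyper_2_2_chunk');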

Testing

Unfortunately, there is no native PostgreSQL method to get the freeze status of a tuple. However, it can be inspected via the pageinspect extension. Since this extension is not guaranteed to be installed during regression checks, the verification was done manually. If this check should become part of a regression test, we could implement our own functions for checking the visibility status of a page. However, that would require additional engineering effort.

CREATE EXTENSION pageinspect;
CREATE EXTENSION timescaledb;

CREATE TABLE sensor_data(
time timestamp NOT NULL,
sensor_id integer NOT NULL,
cpu double precision,
temperature double precision);

SELECT FROM create_hypertable(relation=>'sensor_data', time_column_name=> 'time');

ALTER TABLE sensor_data SET (timescaledb.compress, timescaledb.compress_orderby = 'sensor_id, cpu, temperature');

INSERT INTO sensor_data values(now(), 1, 2, 3);

SELECT * FROM chunk_compression_stats('sensor_data');

SELECT compress_chunk(i, if_not_compressed => true)  FROM show_chunks('sensor_data') i;

SELECT t_ctid, raw_flags, combined_flags
 FROM heap_page_items(get_raw_page('_timescaledb_internal.compress_hyper_2_2_chunk', 0)),
   LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)
 WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;

### Without this patch
 t_ctid |              raw_flags               | combined_flags 
--------+--------------------------------------+----------------
 (0,1)  | {HEAP_HASVARWIDTH,HEAP_XMAX_INVALID} | {}
(1 row)

### Using this patch
 t_ctid |                                 raw_flags                                  |   combined_flags   
--------+----------------------------------------------------------------------------+--------------------
 (0,1)  | {HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN}
(1 row)
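Another way to observe the effect is to measure the WAL generated by the compress_chunk() call and compare the value with and without the patch. This is a sketch that assumes an otherwise idle server, since concurrent activity also advances the WAL position:

SELECT pg_current_wal_lsn() AS lsn_before \gset

SELECT compress_chunk(i, if_not_compressed => true) FROM show_chunks('sensor_data') i;

-- WAL bytes written between the two positions
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_before')) AS wal_generated;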

codecov bot commented Jul 21, 2023

Codecov Report

Merging #5890 (3d63a0e) into main (65ecda9) will increase coverage by 0.05%.
Report is 6 commits behind head on main.
The diff coverage is 95.23%.

@@            Coverage Diff             @@
##             main    #5890      +/-   ##
==========================================
+ Coverage   65.05%   65.10%   +0.05%     
==========================================
  Files         246      246              
  Lines       56955    56989      +34     
  Branches    12621    12625       +4     
==========================================
+ Hits        37050    37101      +51     
+ Misses      18058    18038      -20     
- Partials     1847     1850       +3     
Files Coverage Δ
src/telemetry/stats.c 80.47% <100.00%> (+0.06%) ⬆️
src/telemetry/telemetry.c 84.68% <100.00%> (+0.05%) ⬆️
src/ts_catalog/catalog.h 100.00% <ø> (ø)
tsl/src/compression/api.c 88.22% <100.00%> (+0.09%) ⬆️
tsl/src/compression/compression.c 91.74% <100.00%> (+0.10%) ⬆️
tsl/src/compression/compression.h 40.00% <ø> (ø)
tsl/test/src/test_compression.c 79.19% <100.00%> (ø)
tsl/src/chunk_copy.c 0.00% <0.00%> (ø)

... and 10 files with indirect coverage changes


@mkindahl mkindahl self-requested a review July 24, 2023 09:11
@jnidzwetzki jnidzwetzki force-pushed the freeze_tuples branch 7 times, most recently from f760963 to 4dd4285 Compare September 29, 2023 15:01
@jnidzwetzki jnidzwetzki marked this pull request as ready for review September 29, 2023 15:01
github-actions bot: @antekresic, @gayyappan: please review this pull request.

@jnidzwetzki jnidzwetzki force-pushed the freeze_tuples branch 2 times, most recently from 03a507a to 0639c45 Compare September 29, 2023 15:15
Comment on lines +504 to +510
* In contrast, when the compressed chunk part is created in the same transaction as the tuples
* are written, the compressed chunk (i.e., the catalog entry) becomes visible to other
* transactions only after the transaction that performs the compression is commited and
* the uncompressed chunk is truncated.
Member:

Won't this mean that we can rarely benefit from this, as we are moving towards pre-creating chunks and getting rid of the truncate due to its high locking requirements?

@jnidzwetzki (Contributor, Author) commented on Oct 2, 2023:
This is correct. If we precreate a chunk together with the compressed counterpart, we cannot benefit from this optimization.

Contributor:
We can easily create a policy for pre-creating chunks ... or is the idea doing it in the chunk dispatch code?

Contributor:
Ahh... I guess you're talking about this PR: #5849

@mkindahl (Contributor) left a comment:

I see no problems with the patch, but given that we do not know how much it will be used, it might be good to add telemetry information so that we can track actual usage, especially given the mentioned changes to pre-create chunks.

Comment on lines 519 to +526
 cstat = compress_chunk(cxt.srcht_chunk->table_id,
                        compress_ht_chunk->table_id,
                        colinfo_array,
-                       htcols_listlen);
+                       htcols_listlen,
+                       insert_options);
Contributor:
You might want to track the number of times this feature is used by adding it to the telemetry. That way, we can see if the feature is being used and understand if it is useful or if it is just in the way and we should remove it.

@jnidzwetzki (Contributor, Author):
@mkindahl I added the information to the telemetry. Does it provide the information you had in mind?

Contributor:
I think this provides enough information to see how widely it is used. Thanks for adding it!
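For completeness, the telemetry report can be inspected locally. This is only a sketch; the exact JSON field that carries the new counter is not shown here and may differ from the final implementation:

-- Dump the full telemetry report and search it for the compression counters.
SELECT jsonb_pretty(get_telemetry_report());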

after_compression_table_bytes | 8192
after_compression_table_bytes | 16384
Contributor:
Why is the table larger as a result of this patch?

@jnidzwetzki (Contributor, Author):
@mkindahl Have you seen the Changed test output section in the PR description? Is the information sufficient to explain the changed test output, or should I add more information?

Contributor:
Sorry, I missed that. That is sufficient.

tsl/test/expected/telemetry_stats-15.out (outdated comment, resolved)
@jnidzwetzki jnidzwetzki force-pushed the freeze_tuples branch 8 times, most recently from bdc48e7 to 63fdcde Compare October 19, 2023 12:37
sql/pre_install/tables.sql (outdated comment, resolved)
@akuzm (Member) left a comment:
lgtm, although not sure about the catalog changes

@jnidzwetzki jnidzwetzki enabled auto-merge (rebase) October 25, 2023 11:20
@jnidzwetzki jnidzwetzki merged commit 8767de6 into timescale:main Oct 25, 2023
35 checks passed
@jnidzwetzki jnidzwetzki mentioned this pull request Nov 23, 2023
jnidzwetzki added a commit that referenced this pull request Nov 28, 2023
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**
We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will
announce the specific version of TimescaleDB in which PostgreSQL 13 support will no longer
be included.

**Starting from TimescaleDB 2.13.0**
* No Amazon Machine Images (AMI) are published. If you previously used AMI, please
use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default

**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks using chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Potential data loss when compressing a table with a partial index that matches compression order.
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time,
             the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support